Learning hidden Markov models from aggregate observations

Authors

Abstract

In this paper, we propose an algorithm for estimating the parameters of a time-homogeneous hidden Markov model (HMM) from aggregate observations. This problem arises when only population-level counts of the number of individuals at each time step are available, and one seeks to learn the individual-level HMM from these counts. Our algorithm is built upon the classical expectation–maximization algorithm and a recently proposed aggregate inference algorithm (Sinkhorn belief propagation). We present parameter learning in two different settings of HMMs: one with discrete observations and one with continuous observations, and the algorithm exhibits convergence guarantees in both cases. Moreover, our framework naturally reduces to the standard Baum–Welch algorithm when the population size is 1. The efficacy of the algorithm is demonstrated through several numerical experiments.
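
For reference on the population-size-1 special case mentioned in the abstract, the sketch below is a plain NumPy implementation of the standard Baum–Welch algorithm for a discrete-observation HMM. It is not the paper's aggregate algorithm (which, per the abstract, combines expectation–maximization with Sinkhorn belief propagation over population-level counts); the function name, random initialization, and fixed iteration count are illustrative assumptions.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """Standard Baum-Welch (EM) for a discrete HMM observed on a single
    individual, i.e. the population-size-1 case referred to in the abstract.
    Returns the estimated initial distribution pi, transition matrix A,
    and emission matrix B."""
    obs = np.asarray(obs)
    T = len(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)

    for _ in range(n_iter):
        # E-step: scaled forward-backward recursions.
        alpha = np.zeros((T, n_states))
        c = np.zeros(T)                      # scaling factors
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.zeros((T, n_states))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]

        gamma = alpha * beta                 # posterior state marginals
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((T - 1, n_states, n_states))
        for t in range(T - 1):               # posterior transition marginals
            xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi[t] /= xi[t].sum()

        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return pi, A, B

# Toy usage on a synthetic sequence of 200 symbols drawn from {0, 1, 2}.
obs = np.random.default_rng(1).integers(0, 3, size=200)
pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
```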



Similar references

Learning Hidden Quantum Markov Models

Hidden Quantum Markov Models (HQMMs) can be thought of as quantum probabilistic graphical models that can model sequential data. We extend previous work on HQMMs with three contributions: (1) we show how classical hidden Markov models (HMMs) can be simulated on a quantum circuit, (2) we reformulate HQMMs by relaxing the constraints for modeling HMMs on quantum circuits, and (3) we present a lea...


Learning Imprecise Hidden Markov Models

Consider a stationary precise hidden Markov model (HMM) with n hidden states X_k, taking values x_k in a set {1, ..., m}, and n observations O_k, taking values o_k. The marginal model p_{X_1}(x_1), the emission models p_{O_k|X_k}(o_k|x_k), and the transition models p_{X_k|X_{k-1}}(x_k|x_{k-1}) are all unknown. We can then use the Baum–Welch algorithm [see, e.g., 4] to get a maximum-likelihood estimate of these models. T...
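
The quantities named in this abstract, the marginal p_{X_1}, the emissions p_{O_k|X_k}, and the transitions p_{X_k|X_{k-1}}, are exactly the inputs of the forward recursion whose likelihood Baum–Welch maximizes. A minimal sketch of that likelihood computation, assuming NumPy array conventions and illustrative function and argument names, is:

```python
import numpy as np

def hmm_log_likelihood(obs, p_x1, p_trans, p_emit):
    """Scaled forward recursion: returns log p(o_1, ..., o_n) of a discrete
    observation sequence under a precise HMM.
      p_x1[i]       = p_{X_1}(i)            (marginal of the first hidden state)
      p_trans[i, j] = p_{X_k|X_{k-1}}(j|i)  (transition model)
      p_emit[i, o]  = p_{O_k|X_k}(o|i)      (emission model)
    """
    alpha = p_x1 * p_emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ p_trans) * p_emit[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik
```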


Ensemble Learning for Hidden Markov Models

The standard method for training Hidden Markov Models optimizes a point estimate of the model parameters. This estimate, which can be viewed as the maximum of a posterior probability density over the model parameters, may be susceptible to over-fitting, and contains no indication of parameter uncertainty. Also, this maximum may be unrepresentative of the posterior probability distribution. In this...


Online Learning in Discrete Hidden Markov Models

We present and analyze three different online algorithms for learning in discrete Hidden Markov Models (HMMs) and compare their performance with the Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of the generalization error we draw learning curves in simplified situations and compare the results. The performance for learning drifting concepts of one of the presented...
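
One concrete way to use the Kullback–Leibler divergence as a generalization-error measure for discrete HMMs is to compare the distributions that the true and the learned model induce over short observation sequences. The brute-force sketch below enumerates all sequences of a fixed length, so it only scales to toy alphabets; the cited paper's precise definition (e.g., a per-symbol or asymptotic rate) may differ, and all names here are illustrative.

```python
import itertools
import numpy as np

def seq_prob(obs, pi, A, B):
    """p(o_1, ..., o_L) for a discrete HMM (pi, A, B) via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def kl_between_hmms(true_hmm, learned_hmm, n_symbols, L=5):
    """Exact KL over length-L observation sequences: sum_o p(o) log(p(o)/q(o)).
    Assumes the learned model assigns every sequence positive probability."""
    kl = 0.0
    for obs in itertools.product(range(n_symbols), repeat=L):
        p = seq_prob(obs, *true_hmm)
        q = seq_prob(obs, *learned_hmm)
        if p > 0.0:
            kl += p * np.log(p / q)
    return kl
```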


Learning Hidden Markov Models with Geometric Information

Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) provide a useful tool for modeling dynamical systems. They are particularly useful for representing environments such as road networks and office buildings, which are typical for robot navigation and planning. In a previous paper [SK97] we have empirically shown that by taking advantage of readily available odo...



Journal

Journal title: Automatica

Year: 2022

ISSN: 1873-2836, 0005-1098

DOI: https://doi.org/10.1016/j.automatica.2021.110100